
    The Cortex and the Critical Point

    How the cerebral cortex operates near a critical phase transition point for optimum performance. Individual neurons have limited computational power, but when they work together, it is almost like magic. Firing synchronously and then breaking off to improvise by themselves, they can be, paradoxically, both independent and interdependent. This happens near the critical point, where neurons are poised between a phase in which activity is damped and a phase in which it is amplified, where information processing is optimized and complex emergent activity patterns arise. The claim that neurons in the cortex work best when they operate near the critical point is known as the criticality hypothesis. In this book John Beggs, one of the pioneers of this hypothesis, offers an introduction to the critical point and its relevance to the brain. Drawing on recent experimental evidence, Beggs first explains the main ideas underlying the criticality hypothesis and emergent phenomena. He then discusses the critical point and its two main consequences: first, scale-free properties that confer optimum information processing; and second, universality, or the idea that complex emergent phenomena, like those seen near the critical point, can be explained by relatively simple models that are applicable across species and scales. Finally, Beggs considers future directions for the field, including research on homeostatic regulation, quasicriticality, and the expansion of the cortex and intelligence. An appendix provides technical material; many chapters include exercises that use freely available code and data sets.
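The central idea above, a poise between a phase where activity dies out and a phase where it is amplified, can be illustrated with a toy branching process. This is a minimal sketch, not material from the book; the assumed control parameter `sigma` is the mean number of descendant activations per active neuron (subcritical below 1, critical at 1, supercritical above 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_size=100_000):
    """One avalanche of a simple branching process.

    Each active unit independently triggers a Poisson-distributed
    number of descendants with mean sigma. The avalanche size is
    the total number of activations before activity dies out
    (capped at max_size so supercritical runs terminate).
    """
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = rng.poisson(sigma, active).sum()
    return size

# Subcritical activity dies out quickly; at the critical point the
# size distribution becomes heavy-tailed (power-law-like).
sub = [avalanche_size(0.7) for _ in range(2000)]
crit = [avalanche_size(1.0) for _ in range(2000)]
print(max(sub), max(crit))  # the critical maximum is far larger
```

Plotting a histogram of `crit` on log-log axes would show the roughly straight line characteristic of the scale-free avalanche statistics the blurb describes.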

    Neuroevolution on the Edge of Chaos

    Echo state networks are a special type of recurrent neural network. Recent papers have stated that echo state networks maximize their computational performance at the transition between order and chaos, the so-called edge of chaos. This work confirms this statement in a comprehensive set of experiments. Furthermore, the echo state networks are compared to networks evolved via neuroevolution. The evolved networks outperform the echo state networks; however, the evolution consumes significant computational resources. It is demonstrated that echo state networks with local connections combine the best of both worlds: the simplicity of random echo state networks and the performance of evolved networks. Finally, it is shown that evolution tends to stay close to the ordered side of the edge of chaos.
    Comment: To appear in Proceedings of the Genetic and Evolutionary Computation Conference 2017 (GECCO '17).
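As an illustration of the edge-of-chaos tuning discussed above (a hedged sketch, not the paper's actual setup), an echo state reservoir is commonly placed near the order/chaos transition by rescaling its random recurrent weight matrix to a chosen spectral radius close to 1:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_reservoir(n, spectral_radius):
    """Random recurrent weight matrix rescaled so that the magnitude
    of its largest eigenvalue equals spectral_radius. Values near 1
    put the reservoir close to the order/chaos transition; staying
    slightly below 1 keeps it on the ordered side."""
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def run(W, W_in, inputs):
    """Drive the reservoir with a scalar input sequence (tanh units)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

n = 100
W = make_reservoir(n, spectral_radius=0.95)  # just on the ordered side
W_in = rng.standard_normal(n)
states = run(W, W_in, rng.standard_normal(200))
print(states.shape)  # -> (200, 100)
```

In a full echo state network only a linear readout on `states` is trained; the finding quoted above is that performance of such readouts peaks when the spectral radius sits near the transition.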

    Being Critical of Criticality in the Brain

    Relatively recent work has reported that networks of neurons can produce avalanches of activity whose sizes follow a power law distribution. This suggests that these networks may be operating near a critical point, poised between a phase where activity rapidly dies out and a phase where activity is amplified over time. The hypothesis that the electrical activity of neural networks in the brain is critical is potentially important, as many simulations suggest that information processing functions would be optimized at the critical point. This hypothesis, however, is still controversial. Here we explain the concept of criticality and review the substantial objections to the criticality hypothesis raised by skeptics. Points and counterpoints are presented in dialog form.

    A few strong connections: optimizing information retention in neuronal avalanches

    Background: How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks.
    Results: Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it correctly matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns.
    Conclusions: We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. Fixed distributions impose constraints on learning, however. The network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
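The "many weak weights and only a few strong ones" picture can be made concrete with a lognormal sample. This is an illustrative assumption for the sketch only; the abstract's model uses its own tunable weight distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative lognormal weights: many weak connections, a few strong.
w = rng.lognormal(mean=-1.0, sigma=1.0, size=10_000)

median, mean = np.median(w), w.mean()
top1 = np.sort(w)[-100:].sum() / w.sum()  # share of weight in top 1%
print(f"median={median:.3f}  mean={mean:.3f}  top 1% share={top1:.2%}")
# The mean exceeds the median (right skew), and a small fraction of
# connections carries a disproportionate share of the total weight.
```

Under this kind of distribution most pairwise interactions are weak, while the few strong connections dominate which activity patterns are stable, which is the qualitative regime the abstract argues for.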

    Stability of polarization singularities in disordered photonic crystal waveguides

    The effects of short-range disorder on the polarization characteristics of light in photonic crystal waveguides were investigated using finite-difference time-domain simulations, with a view to assessing the stability of polarization singularities. It was found that points of local circular polarization (C points) and contours of linear polarization (L lines) continued to appear even in the presence of high levels of disorder, and that they remained close to their positions in the ordered crystal. These results are a promising indication that devices exploiting polarization in these structures are viable given current fabrication standards.
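C points and L lines are conventionally located from the Stokes parameters of the transverse field: circular polarization corresponds to |S3| = S0 and linear polarization to S3 = 0. A minimal sketch follows (one common sign convention; not tied to the paper's simulations):

```python
import numpy as np

def stokes(Ex, Ey):
    """Stokes parameters from complex transverse field components
    (one common sign convention; conventions differ in the literature)."""
    S0 = np.abs(Ex)**2 + np.abs(Ey)**2
    S1 = np.abs(Ex)**2 - np.abs(Ey)**2
    S2 = 2 * np.real(Ex * np.conj(Ey))
    S3 = -2 * np.imag(Ex * np.conj(Ey))
    return S0, S1, S2, S3

# Circular light (Ey = i*Ex) gives |S3|/S0 == 1: a C point.
S0, S1, S2, S3 = stokes(np.array([1.0 + 0j]), np.array([1j]))
print(abs(S3 / S0))  # -> [1.]

# Linear light (Ex, Ey in phase) gives S3 == 0: on an L line.
S0, S1, S2, S3 = stokes(np.array([1.0 + 0j]), np.array([1.0 + 0j]))
print(S3)  # -> [0.]
```

On a simulated field map, C points would be found where |S3|/S0 crosses 1 (equivalently where S1 and S2 vanish together), and L lines as the zero contours of S3.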

    Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. CC methods generally performed more poorly than extended TE, but were useful when the data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
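As a toy illustration of delay-resolved TE (a minimal histogram estimator for binary spike trains at message length one bin; not the free software package the abstract describes), scanning the delay recovers a known synaptic lag:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """Histogram-based transfer entropy (bits) from binary train x to y:
    TE = sum p(y_t, y_{t-1}, x_{t-delay})
         * log2[ p(y_t | y_{t-1}, x_{t-delay}) / p(y_t | y_{t-1}) ].
    History and message length are both one time bin."""
    x, y = np.asarray(x), np.asarray(y)
    t = np.arange(max(delay, 1), len(y))
    triples = Counter(zip(y[t], y[t - 1], x[t - delay]))
    n = len(t)
    te = 0.0
    for (yt, yp, xp), c in triples.items():
        p_joint = c / n
        p_y_both = c / sum(v for (a, b, d), v in triples.items()
                           if b == yp and d == xp)
        p_y_past = (sum(v for (a, b, d), v in triples.items()
                        if a == yt and b == yp)
                    / sum(v for (a, b, d), v in triples.items()
                          if b == yp))
        te += p_joint * np.log2(p_y_both / p_y_past)
    return te

# Toy check: y copies x with a 3-bin lag, so TE should peak at delay 3.
rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.2).astype(int)
y = np.roll(x, 3)
tes = {d: transfer_entropy(x, y, d) for d in range(1, 6)}
print(max(tes, key=tes.get))  # -> 3
```

A single-delay analysis probing only `delay=1` would miss this connection entirely, which is the abstract's argument for scanning TE over the physiological range of synaptic delays.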